68 research outputs found
UMS: Live Migration of Containerized Services across Autonomous Computing Systems
Containerized services deployed across various computing systems, such as
edge and cloud, require live migration support to enable user mobility,
elasticity, and load balancing. To enable such ubiquitous and efficient
service migration, a live migration solution needs to handle circumstances
where users have various authority levels (full control, limited control, or no
control) over the underlying computing systems. Supporting the live migration
at these levels serves as the cornerstone of interoperability, and can unlock
several use cases across various forms of distributed systems. As such, in this
study, we develop a ubiquitous migration solution (called UMS) that, for a
given containerized service, can automatically identify the feasible migration
approach, and then seamlessly perform the migration across autonomous computing
systems. UMS does not interfere with the way the orchestrator handles
containers and can coordinate the migration without orchestrator
involvement. Moreover, UMS is orchestrator-agnostic, i.e., it can be plugged
into any underlying orchestrator platform. UMS is equipped with novel methods
that can coordinate and perform the live migration at the orchestrator,
container, and service levels. Experimental results show that the
service-level approach for single-process containers, and the
container-level approach for multi-process containers with a small
(< 128 MiB) memory footprint, lead to the lowest migration overhead and
service downtime.
To demonstrate the potential of UMS in realizing interoperability and
multi-cloud scenarios, we used it to perform live service migration across
heterogeneous orchestrators, and between Microsoft Azure and Google Cloud.
Comment: Accepted in IEEE Globecom 2023 conference
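The reported decision rule (service-level migration for single-process containers; container-level migration for multi-process containers under 128 MiB) can be captured in a minimal sketch. The function name, parameters, and the fallback case below are hypothetical illustrations, not part of UMS's actual interface:

```python
# Hypothetical sketch of the migration-level selection rule suggested by the
# UMS experimental results; names are illustrative, not the real UMS API.
SMALL_MEMORY_MIB = 128  # memory threshold reported in the experiments

def select_migration_level(process_count: int, memory_mib: float) -> str:
    """Pick the migration approach with the lowest reported overhead."""
    if process_count == 1:
        return "service-level"      # best for single-process containers
    if memory_mib < SMALL_MEMORY_MIB:
        return "container-level"    # best for small multi-process containers
    # Assumed fallback: the abstract does not report a winner for large
    # multi-process containers.
    return "orchestrator-level"
```

For example, `select_migration_level(4, 64)` would choose container-level migration for a four-process container with a 64 MiB footprint.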
Object as a Service (OaaS): Enabling Object Abstraction in Serverless Clouds
The Function as a Service (FaaS) paradigm is becoming widespread and is
envisioned as the next generation of cloud systems that mitigates the burden
on programmers and cloud solution architects. However, the FaaS abstraction
only makes the cloud resource management aspects transparent; it does not deal
with the application data aspects. As such, developers bear the burden of
managing the application data, often via separate cloud services (e.g., AWS
S3). Similarly, the FaaS abstraction does not natively support function
workflow, hence, the developers often have to work with workflow orchestration
services (e.g., AWS Step Functions) to build workflows. Moreover, they have to
explicitly navigate the data throughout the workflow. To overcome these
problems of FaaS, we design a higher-level cloud programming abstraction that
hides these complexities and mitigates the burden of cloud-native application
development. We borrow the notion of an object from object-oriented
programming and propose a new abstraction level atop the function abstraction,
known as Object as a Service (OaaS). OaaS encapsulates the application data and
function into the object abstraction and relieves the developers from resource
and data management burdens. It also unlocks opportunities for built-in
optimization features, such as software reusability, data locality, and
caching. OaaS natively supports dataflow programming such that developers
define a workflow of functions transparently without getting involved in data
navigation, synchronization, and parallelism aspects. We implemented a
prototype of the OaaS platform and evaluated it under real-world settings
against state-of-the-art platforms regarding the imposed overhead, scalability,
and ease of use. The results demonstrate that OaaS streamlines cloud
programming and offers scalability with an insignificant overhead to the
underlying cloud system.
Comment: This version of the paper has been significantly altered and new
observations have been obtained. Therefore, we withdraw the paper until the
new version becomes available.
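The core OaaS idea, an object that encapsulates application state together with its functions so the platform can manage persistence and placement, can be illustrated conceptually. This sketch uses plain Python with invented names; it is not the platform's real interface:

```python
from dataclasses import dataclass, field

# Conceptual illustration of the OaaS abstraction: state and functions live
# in one object. Names here are invented for illustration only; in OaaS,
# state persistence would be handled by the platform, not by explicit calls
# to a storage service such as AWS S3.
@dataclass
class CounterObject:
    state: dict = field(default_factory=lambda: {"count": 0})

    def increment(self, step: int = 1) -> int:
        """A 'function' bound to the object's encapsulated state."""
        self.state["count"] += step
        return self.state["count"]

obj = CounterObject()
obj.increment()     # state travels with the object, not a separate store
obj.increment(2)
```

The design point is that the developer never touches the storage layer directly; the object boundary is what the platform would use for data locality and caching decisions.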
- …